
    Dissociable Influences of Auditory Object vs. Spatial Attention on Visual System Oscillatory Activity

    Given that both auditory and visual systems have anatomically separate object identification (“what”) and spatial (“where”) pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory “what” vs. “where” attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic (“what”) vs. spatial (“where”) aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7–13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered in the alpha range 400–600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity (“what”) vs. sound location (“where”). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during “what” vs. “where” auditory attention.
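
    The sustained effects described above were quantified as alpha-band (7–13 Hz) power estimated from the silent intervals between sound-pair presentations and contrasted between the two attention conditions. As a rough sketch of that kind of contrast only, assuming generic epoched data rather than the study's cortically constrained MEG source estimates, the Python snippet below computes per-channel alpha power with a Welch PSD; the array shapes, sampling rate, and condition variables are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch


def alpha_band_power(epochs, sfreq, fmin=7.0, fmax=13.0):
    """Mean alpha-band power per epoch and channel.

    epochs : ndarray, shape (n_epochs, n_channels, n_times)
        Data cut from the silent periods between sound pairs
        (hypothetical input, not the study's source estimates).
    sfreq : float
        Sampling frequency in Hz (assumed).
    """
    # Welch PSD along the time axis of each epoch.
    freqs, psd = welch(epochs, fs=sfreq,
                       nperseg=min(256, epochs.shape[-1]), axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    # Average power across the 7-13 Hz band.
    return psd[..., band].mean(axis=-1)


# Hypothetical contrast: stronger alpha during "what" (phonetic) than
# "where" (spatial) attention would show up as a positive difference.
rng = np.random.default_rng(0)
sfreq = 600.0  # assumed sampling rate
attend_phoneme = rng.standard_normal((40, 10, 600))   # epochs x channels x times
attend_location = rng.standard_normal((40, 10, 600))
diff = (alpha_band_power(attend_phoneme, sfreq).mean(axis=0)
        - alpha_band_power(attend_location, sfreq).mean(axis=0))
print("Alpha power difference (what - where) per channel:", diff.round(4))
```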

    MEG Source Localization Using Invariance of Noise Space

    We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in the signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.
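
    The method rests on splitting the sensor data into a signal subspace, whose energy varies with the source strengths, and a noise subspace, which remains invariant over time. As a hedged illustration of that subspace separation, the sketch below runs a plain MUSIC-style scan on synthetic data; it is not the INN cost function itself, and the shapes, variable names, and random lead fields are assumptions.

```python
import numpy as np


def music_scan(data, leadfields, n_sources):
    """MUSIC-style subspace scan (illustration of the signal/noise
    subspace split that INN builds on, not the INN cost function).

    data : ndarray, shape (n_sensors, n_times)
    leadfields : ndarray, shape (n_candidates, n_sensors)
        Unit-norm topography for each candidate source (synthetic).
    n_sources : int
        Assumed dimension of the signal subspace.
    """
    # SVD separates sensor space into signal and noise subspaces;
    # modulating source strengths reshapes the signal subspace but
    # leaves the noise subspace unchanged.
    u, _, _ = np.linalg.svd(data, full_matrices=False)
    noise_space = u[:, n_sources:]          # columns spanning the noise subspace
    # A true source topography is nearly orthogonal to the noise
    # subspace, so the cost peaks where this projection is small.
    proj = noise_space.T @ leadfields.T     # (n_noise_dims, n_candidates)
    return 1.0 / np.sum(proj ** 2, axis=0)


# Tiny synthetic example: two sources mixed into 20 "sensors".
rng = np.random.default_rng(1)
L = rng.standard_normal((50, 20))
L /= np.linalg.norm(L, axis=1, keepdims=True)   # unit-norm candidate topographies
true_idx = [5, 30]
waveforms = rng.standard_normal((2, 200))
data = L[true_idx].T @ waveforms + 0.05 * rng.standard_normal((20, 200))
cost = music_scan(data, L, n_sources=2)
print("Top candidates:", sorted(np.argsort(cost)[-2:]))  # expect [5, 30]
```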

    Localization of Class 2 real auditory evoked responses using the different methods.

    The red points indicate the peaks of the cost functions. The threshold was set to 80% of the peak of the cost function within the corresponding source region. To show the underlying anatomical structure, the transparency of the overlaid images was set to 50%. INN identified sources at the left and right supratemporal auditory cortices. MUSIC and BEAMFORMER detected a source only in the left auditory cortex. RAP-MUSIC (2nd recursion) placed a spurious source at the midline.